3 research outputs found

    Limitations of perturbative techniques in the analysis of rhythms and oscillations

    Perturbation theory is an important tool in the analysis of oscillators and their response to external stimuli. It is predicated on the assumption that the perturbations in question are “sufficiently weak”, an assumption that is not always valid when perturbative methods are applied. In this paper, we identify a number of concrete dynamical scenarios in which a standard perturbative technique, based on the infinitesimal phase response curve (PRC), gives predictions that differ from those of the full model. Shear-induced chaos, i.e., chaotic behavior that results from the amplification of small perturbations by underlying shear, is missed entirely by the PRC. We also show that the presence of “sticky” phase-space structures tends to cause perturbative techniques to overestimate the frequencies and regularity of the oscillations. The phenomena we describe can all be observed in a simple 2D neuron model, which we choose for illustration because the PRC is widely used in mathematical neuroscience.
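    To make the PRC-based reduction concrete, here is a minimal sketch (not the paper's 2D neuron model; the sinusoidal PRC Z(θ) = sin θ, the pulse stimulus, and all parameter values are illustrative assumptions) of integrating the reduced phase equation dθ/dt = ω + ε Z(θ) I(t). The failures described above arise when the frequency predicted by this reduction disagrees with the full model.

```python
import numpy as np

# Minimal sketch of the infinitesimal-PRC phase reduction
#   dtheta/dt = omega + eps * Z(theta) * I(t).
# The PRC Z, the stimulus I, and the parameters below are
# illustrative assumptions, not the model used in the paper.

omega = 2 * np.pi   # intrinsic frequency (rad / unit time)
eps = 0.3           # perturbation strength ("sufficiently weak"?)
dt = 1e-3
T = 20.0

def Z(theta):
    """Illustrative infinitesimal phase response curve."""
    return np.sin(theta)

def I(t):
    """Illustrative stimulus: brief periodic pulses."""
    return 1.0 if (t % 1.0) < 0.05 else 0.0

theta = 0.0
ts = np.arange(0.0, T, dt)
thetas = np.empty_like(ts)
for i, t in enumerate(ts):
    thetas[i] = theta
    # Euler step of the reduced phase equation
    theta += dt * (omega + eps * Z(theta) * I(t))

# Frequency estimate from the reduced model; comparing this against
# the full model is where the PRC prediction can fail, e.g. under
# shear-induced chaos or near "sticky" phase-space structures.
print("mean frequency (reduced):", (thetas[-1] - thetas[0]) / T / (2 * np.pi))
```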

    Design of Poisoning Attacks on Linear Regression Using Bilevel Optimization

    Poisoning attacks are among the attack types commonly studied in the field of adversarial machine learning. The adversary generating a poisoning attack is assumed to have access to the training process of a machine learning algorithm and aims to prevent the algorithm from functioning properly by injecting manipulative data while the algorithm is being trained. In this work, we focus on poisoning attacks against linear regression models, which aim to weaken the predictive power of the attacked regression model. We propose a bilevel optimization problem to model this adversarial process between the attacker generating poisoning attacks and the learner trying to learn the best predictive regression model. We give an alternative single-level optimization problem by exploiting the optimality conditions of the learner's problem. A commercial solver is used to solve the resulting single-level problem, generating the whole set of poisoning attack samples at once. In addition, we introduce an iterative approach that determines only a portion of the poisoning attack samples at each iteration. Extensive experiments on two realistic datasets show that the proposed attack strategies are superior to a benchmark algorithm from the literature.
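    As an illustration of the bilevel structure (not the paper's solver-based method): the inner problem is the learner's regression fit, and the outer problem is the attacker maximizing the fitted model's error. The sketch below uses a ridge-regularized learner with a closed-form solution, a single poison point, and finite-difference gradient ascent; all of these choices, and the synthetic data, are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic clean training data (illustrative, not from the paper).
n, d, lam = 50, 3, 1e-2
X = rng.normal(size=(n, d))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=n)

def fit_ridge(Xtr, ytr):
    """Inner (learner's) problem: closed-form ridge regression."""
    k = Xtr.shape[1]
    return np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(k), Xtr.T @ ytr)

def attacker_loss(xp, yp):
    """Outer objective: MSE of the poisoned model on the clean data.
    The attacker wants to *maximize* this."""
    Xp = np.vstack([X, xp])
    yp_all = np.append(y, yp)
    w = fit_ridge(Xp, yp_all)
    return np.mean((X @ w - y) ** 2)

# Gradient ascent on one poison point's features, with the gradient
# estimated by central finite differences through the inner solve.
xp, yp, step, h = rng.normal(size=d), 5.0, 0.5, 1e-5
for _ in range(200):
    g = np.array([
        (attacker_loss(xp + h * e, yp) - attacker_loss(xp - h * e, yp)) / (2 * h)
        for e in np.eye(d)
    ])
    xp += step * g            # ascend: degrade the learner's fit
    xp = np.clip(xp, -5, 5)   # keep the poison point in a plausible box

print("clean-data MSE after poisoning:", attacker_loss(xp, yp))
```

    The paper instead replaces the inner problem by its optimality conditions to obtain a single-level program solved exactly; the gradient-ascent loop here only conveys the attacker-versus-learner interaction.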

    Assignment problem with conflicts

    We focus on an extension of the assignment problem with additional conflict (pair) constraints in conjunction with the assignment constraints and binary restrictions. Given a bipartite graph with a cost associated with each edge and a conflict set of edge pairs, the assignment problem with conflict constraints corresponds to finding a minimum-weight perfect matching that contains no conflicting edge pair. For example, some chemicals cannot be processed on processors that are close to each other, food and toxic products cannot be stored at neighboring locations in the same storage area, and machines cannot be sent to process jobs without satisfying certain spatial constraints. Unlike the well-known assignment problem, this problem is NP-hard. We first introduce a realistic special class and demonstrate its polynomial solvability. Then, we propose a Branch-and-Bound algorithm and a Russian Doll Search algorithm that use assignment problem relaxations for lower-bound computations, and we introduce combinatorial branching rules based on the conflicting edges in an optimal solution of the relaxations. Extensive computational experiments show that the proposed algorithms are very efficient.
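    To illustrate the problem statement (not the paper's Branch-and-Bound or Russian Doll Search algorithms), here is a brute-force sketch on a made-up 4x4 instance: enumerate all perfect matchings, discard those containing a conflicting edge pair, and keep the cheapest. The exponential enumeration also hints at why the conflict constraints make the problem NP-hard, in contrast to the polynomially solvable assignment problem.

```python
from itertools import permutations

# Toy instance: assign 4 jobs to 4 machines at minimum total cost,
# without using any forbidden pair of edges together.
# Costs and conflicts are illustrative, made-up data.
cost = [
    [4, 2, 5, 7],
    [8, 3, 10, 8],
    [12, 5, 4, 5],
    [6, 3, 7, 14],
]
# Conflicting edge pairs: ((job_i, machine_a), (job_j, machine_b))
conflicts = {((0, 1), (1, 2)), ((2, 3), (3, 0))}

def is_feasible(assign):
    """assign[i] = machine of job i; reject any conflicting edge pair."""
    edges = {(i, m) for i, m in enumerate(assign)}
    return not any(e1 in edges and e2 in edges for e1, e2 in conflicts)

best = None
for perm in permutations(range(4)):          # all perfect matchings
    if is_feasible(perm):
        total = sum(cost[i][perm[i]] for i in range(4))
        if best is None or total < best[0]:
            best = (total, perm)

print("optimal cost and assignment:", best)
```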
